
Election disinformation fears escalate as Biden robocall featuring audio deepfake emerges

There is growing concern about a potential flood of AI-powered disinformation during the 2024 White House race, and a recent robocall impersonating US President Joe Biden has sharpened the alarm over audio deepfakes.

“What a bunch of malarkey,” said the phone message, which digitally spoofed Biden’s voice and repeated one of his signature phrases.

The call urged New Hampshire residents not to vote in last month’s Democratic primary, prompting state officials to launch an investigation into possible voter suppression.

It also led to calls from campaigners for tighter guardrails around generative AI tools or an outright ban on robocalls.

Disinformation researchers fear rampant abuse of AI-based apps in a key election year thanks to the proliferation of voice-cloning tools that are cheap, easy to use and difficult to trace.

“This is definitely the tip of the iceberg,” Vijay Balasubramaniyan, CEO and founder of cybersecurity firm Pindrop, told AFP.

“We can expect to see many more deepfakes this election season.”

According to a detailed analysis published by Pindrop, the Biden robocall was created with a text-to-speech system developed by AI voice-cloning startup ElevenLabs.

The scandal comes as campaigners on both sides of the US political aisle harness advanced artificial intelligence tools for effective campaign communications and tech investors pump millions of dollars into voice-cloning startups.

Balasubramaniyan declined to say whether Pindrop had shared its findings with ElevenLabs, which last month announced a funding round that valued the company at $1.1 billion, according to Bloomberg News.

ElevenLabs did not respond to AFP’s repeated requests for comment. Its website directs users to a free text-to-speech generator that “instantly creates natural AI voices in any language.”

Under its safety guidelines, the company said users could create voice clones of political figures like Donald Trump without their permission if the clones “express humor or mockery in a way that makes it clear to the listener that what they’re hearing is a parody, and not genuine content.”

“Election Chaos”

US regulators have been considering making AI-generated robocalls illegal, and the fake Biden call has given that effort new impetus.

“The political deepfake moment is here,” said Robert Weissman, president of Public Citizen.

“Policymakers need to start putting safeguards in place or we’re headed for election chaos. The New Hampshire deepfake is a reminder of the many ways deepfakes can sow confusion.”

Researchers worry about the impact of artificial intelligence tools that create videos and text so seemingly real that voters have a hard time distinguishing truth from fiction, undermining trust in the electoral process.

The biggest concern, however, is audio deepfakes, which have already been used to impersonate celebrities and politicians around the world.

“Of all the surfaces — video, image, voice — that AI can use to suppress voters, voice is the biggest vulnerability,” Tim Harper, senior policy analyst at the Center for Democracy & Technology, told AFP.

“It’s easy to clone a voice with artificial intelligence, and hard to detect.”

“Electoral Integrity”

The ease with which fake audio content can be created and distributed complicates an already hyper-polarized political landscape, undermines trust in the media, and lets anyone claim that genuine “evidence has been fabricated,” Wasim Khaled, CEO of Blackbird.AI, told AFP.

Such concerns are widespread as the proliferation of AI voice tools outpaces detection software.

China’s ByteDance, owner of the wildly popular TikTok platform, recently unveiled StreamVoice, an AI tool that converts a user’s voice in real time into any voice of their choosing.

“Although the attackers used ElevenLabs this time, it is likely to be a different generative AI system in future attacks,” Balasubramaniyan said.

“It is imperative that these tools have adequate safeguards.”

Balasubramaniyan and other researchers recommended building audio watermarks, or digital signatures, into voice-cloning tools as possible safeguards, along with regulation that makes them available only to authenticated users.
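
To illustrate the idea in the simplest possible terms (this is a toy sketch, not Pindrop’s recommendation or any vendor’s actual scheme): a spread-spectrum watermark adds a faint, key-seeded noise pattern to the audio so that anyone holding the same key can later recover an embedded bit sequence. The function names, key, and payload below are invented for illustration, and the only dependency assumed is NumPy.

```python
import numpy as np

def embed_watermark(samples, bits, key, strength=0.01):
    """Add a faint, key-seeded pseudo-random pattern encoding `bits` into the audio."""
    rng = np.random.default_rng(key)       # the key seeds the carrier; verifiers hold the same key
    chip_len = len(samples) // len(bits)   # samples allotted to each watermark bit
    out = samples.astype(np.float64).copy()
    for i, bit in enumerate(bits):
        carrier = rng.standard_normal(chip_len)            # pseudo-random carrier for this bit
        sign = 1.0 if bit else -1.0
        out[i * chip_len:(i + 1) * chip_len] += sign * strength * carrier
    return out

def detect_watermark(samples, n_bits, key):
    """Recover the embedded bits by correlating each segment with the same carriers."""
    rng = np.random.default_rng(key)       # regenerate the identical carrier sequence
    chip_len = len(samples) // n_bits
    bits = []
    for i in range(n_bits):
        carrier = rng.standard_normal(chip_len)
        segment = samples[i * chip_len:(i + 1) * chip_len]
        bits.append(1 if np.dot(segment, carrier) > 0 else 0)  # sign of the correlation
    return bits

# Round trip on synthetic "audio": one second of quiet noise at 16 kHz.
audio = np.random.default_rng(0).standard_normal(16000) * 0.1
payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(audio, payload, key=42)
assert detect_watermark(marked, len(payload), key=42) == payload
```

A deployed scheme would have to survive compression, re-recording and editing, which this toy does not attempt; that fragility is one reason researchers pair watermarking with the authenticated-access controls mentioned above.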

“Even with these measures, it’s really difficult and really expensive to detect when these tools are being used to produce malicious content that violates your terms of service,” Harper said.

“(It) requires investment in trust and safety, and a commitment to building in a way that treats election integrity as a central risk.”

